Representing Spatial Information through Multimodal Interfaces
Author
Abstract
The research discussed here is a component of a larger study exploring the accessibility and usability of spatial data presented through multiple sensory modalities, including haptic, auditory, and visual interfaces. Geographical Information Systems (GIS) and other computer-based tools for spatial display rely predominantly on vision to communicate information to the user, as sight is the spatial sense par excellence. Ongoing research is exploring the fundamental concepts and techniques needed to navigate through multimodal interfaces, which are user, task, domain, and interface specific. This highlights the need for both a conceptual/theoretical schema and extensive usability studies. Preliminary results presented here, on feature recognition and shape tracing in non-visual environments, indicate that multimodal interfaces have considerable potential for facilitating access to spatial data for blind and visually impaired persons. The research is undertaken with the wider goals of increasing information accessibility and promoting “universal ...
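The abstract does not describe an implementation, but the core idea, presenting the same spatial attribute through more than one sensory channel, can be sketched as a simple mapping from feature attributes to per-modality rendering parameters. The Python sketch below is purely illustrative: the SpatialFeature class, the elevation attribute, the pitch range, and the stiffness scale are assumptions made for demonstration, not details taken from the paper or from any particular audio or haptic toolkit.

```python
from dataclasses import dataclass

@dataclass
class SpatialFeature:
    """Minimal stand-in for a GIS feature: a 2-D location plus one attribute."""
    x: float
    y: float
    elevation: float  # attribute to convey non-visually (assumed example)

def to_audio_pitch(value, lo, hi, f_min=220.0, f_max=880.0):
    """Map an attribute value onto an audio pitch in Hz (simple sonification)."""
    t = (value - lo) / (hi - lo) if hi > lo else 0.0
    return f_min + t * (f_max - f_min)

def to_haptic_stiffness(value, lo, hi):
    """Map the same attribute onto a 0..1 stiffness level for a force-feedback device."""
    return (value - lo) / (hi - lo) if hi > lo else 0.0

if __name__ == "__main__":
    features = [SpatialFeature(0.0, 0.0, 120.0),
                SpatialFeature(1.0, 0.0, 340.0),
                SpatialFeature(1.0, 1.0, 230.0)]
    lo = min(f.elevation for f in features)
    hi = max(f.elevation for f in features)
    for f in features:
        print(f"({f.x}, {f.y}): pitch {to_audio_pitch(f.elevation, lo, hi):.0f} Hz, "
              f"stiffness {to_haptic_stiffness(f.elevation, lo, hi):.2f}")
```

In a real system these derived parameters would drive a sonification engine and a force-feedback device; here they are simply printed to show the per-modality mapping.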
Similar Articles
Enhancing Multimedia Interfaces with Intelligence
Current software products such as spreadsheets are beginning to include automated graphical design and input-checking heuristics to provide added automated functionality. The use of such rules requires higher levels of abstract meaning representation than raw media. They also rely upon representing contextual information describing the domain, task, user, and current dialogue. With these technol...
Towards Multimodal Content Representation: Discussion Paper
Multimodal interfaces, combining the use of speech, graphics, gestures, and facial expressions in input and output, promise to provide new possibilities to deal with information in more effective and efficient ways, supporting for instance: the understanding of possibly imprecise, partial or ambiguous multimodal input; the generation of coordinated, cohesive, and coherent multimodal presentatio...
Unification-based Multimodal Integration
Recent empirical research has shown conclusive advantages of multimodal interaction over speech-only interaction for map-based tasks. This paper describes a multimodal language processing architecture which supports interfaces allowing simultaneous input from speech and gesture recognition. Integration of spoken and gestural input is driven by unification of typed feature structures representing...
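The entry above attributes integration to unification of typed feature structures. As a rough illustration of that mechanism, and not the architecture described in the paper, the following Python sketch treats feature structures as nested dictionaries and unifies a toy speech parse, which supplies the command, with a toy gesture parse, which supplies the location. The slot names and values are invented for the example.

```python
FAIL = object()  # sentinel marking unification failure

def unify(a, b):
    """Recursively unify two feature structures represented as nested dicts.

    Atomic values unify only if they are equal; dicts unify feature-by-feature.
    Returns FAIL if the structures are incompatible.
    """
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for key, b_val in b.items():
            if key in result:
                merged = unify(result[key], b_val)
                if merged is FAIL:
                    return FAIL
                result[key] = merged
            else:
                result[key] = b_val
        return result
    return a if a == b else FAIL

# Toy parses: speech supplies the command type, gesture supplies the location.
speech = {"type": "create_unit", "object": {"kind": "flood_barrier"}}
gesture = {"type": "create_unit", "object": {"location": {"x": 51.2, "y": 3.7}}}

print(unify(speech, gesture))
# -> {'type': 'create_unit',
#     'object': {'kind': 'flood_barrier', 'location': {'x': 51.2, 'y': 3.7}}}
```

If the two parses assigned conflicting atomic values to the same slot, unification would fail, which is how incompatible speech and gesture hypotheses are filtered out in this style of integration.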
Publication year: 2002